Information Warfare


On the Military Applications of Large Language Models

Johansson, Satu, Riihonen, Taneli

arXiv.org Artificial Intelligence

In this paper, military use cases and applications, and their implementation, are considered for natural language processing and large language models, which rose to fame with the invention of the generative pre-trained transformer (GPT) and the extensive foundation-model pretraining done by OpenAI for ChatGPT and others. First, we interrogate a GPT-based language model (viz. Microsoft Copilot) to make it reveal its own knowledge about their potential military applications, and then critically assess the information. Second, we study how commercial cloud services (viz. Microsoft Azure) could readily be used to build such applications, and assess which of them are feasible. We conclude that the summarization and generative properties of language models directly facilitate many applications at large, and that other features may find particular uses. This paper was originally presented at the NATO Science and Technology Organization Symposium (ICMCIS) organized by ...


Propaganda and Information Dissemination in the Russo-Ukrainian War: Natural Language Processing of Russian and Western Twitter Narratives

Gouliev, Zaur

arXiv.org Artificial Intelligence

The conflict in Ukraine has been characterised not only by military engagement but also by a significant information war, with social media platforms like X, formerly known as Twitter, playing an important role in shaping public perception. This article provides an analysis of tweets from propaganda accounts and trusted accounts collected from the onset of the war in February 2022 until mid-May 2022, with n = 40,000 total tweets. We utilise natural language processing and machine learning algorithms to assess sentiment and identify key themes, topics and narratives across the dataset, with human-in-the-loop (HITL) analysis throughout. Our findings indicate distinct strategies in how information is created, spread and targeted at different audiences by both sides. Propaganda accounts frequently employ emotionally charged language and disinformation to evoke fear and distrust, whereas other accounts, primarily Western, tend to focus on factual reporting and humanitarian aspects of the conflict. Clustering analysis reveals groups of accounts with similar behaviours, which we suspect indicates the presence of coordinated efforts. This research attempts to contribute to our understanding of the dynamics of information warfare and offers techniques for future studies on social media influence in military conflicts.
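The clustering step described in the abstract can be illustrated with a minimal sketch: TF-IDF features plus k-means over a handful of invented example tweets. This is a hypothetical illustration of the general technique, not the authors' actual pipeline; the sample tweets, the cluster count, and the scikit-learn components are all assumptions.

```python
# Hypothetical sketch of tweet clustering (NOT the paper's actual pipeline):
# vectorize short texts with TF-IDF, then group them with k-means.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Invented example tweets standing in for the n=40,000 dataset.
tweets = [
    "Enemy forces are collapsing, total victory is near",
    "Fear and betrayal everywhere, trust no one in the west",
    "Humanitarian corridors opened for civilians today",
    "Aid convoys delivered food and medicine to Kharkiv",
]

# TF-IDF turns each tweet into a sparse weighted term vector.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(tweets)

# k-means partitions the vectors into k groups; the cluster count
# here (2) is arbitrary for illustration.
km = KMeans(n_clusters=2, n_init=10, random_state=0)
labels = km.fit_predict(X)
print(labels)  # one cluster label (0 or 1) per tweet
```

In practice an analyst would inspect the top-weighted terms per cluster and, as the abstract notes, keep a human in the loop to judge whether a cluster reflects coordinated behaviour rather than a topical coincidence.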


The Era of Faked CCTV Has Truly Arrived

WIRED

While Jamal Khashoggi was being carefully slaughtered in the Saudi consulate in Istanbul, a man who bore only a clumsy resemblance to him was trying on his shoes and clothes. The plan was for the imposter to appear on CCTV cameras while exiting the consulate and walk back to Khashoggi's residence. The plan eventually blew up, because Turkish intelligence had already bugged the consulate and recorded exactly what had happened. This was one of the first attempts by state actors to manipulate other states (or publics) through CCTV footage. However, recent actions by Iranian state television have taken this type of information warfare to a different level.


Weapons of the weak: Russia and AI-driven asymmetric warfare

#artificialintelligence

"Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world."1 "A people that no longer can believe anything cannot make up its mind. It is deprived not only of its capacity to act but also of its capacity to think and to judge. And with such a people you can then do what you please."2


Dark truth behind Jacinda 'smoking' video

#artificialintelligence

When a video purporting to show New Zealand Prime Minister Jacinda Ardern smoking drugs surfaced on social media in recent months, experts quickly dismissed it as a fake. The video, which was viewed and shared thousands of times, showed a woman smoking from what appeared to be a crack pipe. The PM's face had been superimposed using artificial intelligence. But the video, created for YouTube, was convincing enough to fool many of those who shared it. It was the latest example of how disturbingly authentic-looking videos can blur the lines between reality and fantasy.


Joint Chiefs' Information Officer: U.S. Is Behind on Information Warfare. AI Can Help

#artificialintelligence

The United States needs a better strategy and more advanced tools for information operations, Lt. Gen. Dennis Crall, the Joint Staff's chief information officer, said Thursday. The government has become slower and less confident in its approach, a reticence it can't afford as artificial intelligence drastically increases the pace of messaging and information campaigns, said Crall, who is also the Joint Staff's director for command, control, communications, computers, and cyber. "The speed at which machines and AI won some of these information campaigns changes the game drastically for us. If we study, if we're hesitant, if we don't have good left and right lateral limits, if every operation requires a new set of permissions... We're never going to compete." Crall made his remarks at the NDIA conference for Special Operations and Low Intensity Conflict, or SOLIC.


Navy revs up information warfare to stop enemy missiles, weapons

FOX News

If enemy cruise missiles, helicopter gunfire and even fighter-jet-launched bombs close in on Navy surface ships at sea, service commanders could employ a range of time-sensitive layered defenses, including interceptor missiles, deck-mounted guns, electronic warfare tactics and even lasers. Navy preparations for this kind of scenario include the use of radar, long-range sensors and coordinated surveillance with surface, undersea and air assets -- all operated to enable the rapid destruction of incoming enemy fire. Virtually all of these contingencies rely upon an often overlooked area of maritime warfare -- information warfare. Targeting data for virtually any defensive weapons system must precede or inform fire control systems and certain kinds of sensor-weapons fusion. For this reason, the Navy is revving up its focus on training a new generation of information warriors to serve for decades to come, armed, hopefully, with the technical skills needed to counter enemy attacks today and 20 years from now.


Memes That Kill: The Future Of Information Warfare

#artificialintelligence

Memes and social networks have become weaponized, while many governments seem ill-equipped to understand the new reality of information warfare. How will we fight state-sponsored disinformation and propaganda in the future? In 2011, a university professor with a background in robotics presented an idea that seemed radical at the time. After conducting research backed by DARPA -- the same defense agency that helped spawn the internet -- Dr. Robert Finkelstein proposed the creation of a brand new arm of the US military, a "Meme Control Center." In internet-speak, the word "meme" often refers to an amusing picture that goes viral on social media. More broadly, however, a meme is any idea that spreads, whether that idea is true or false. It is this broader definition of meme that Finkelstein had in mind when he proposed the Meme Control Center and his idea of "memetic warfare" (from "Tutorial: Military Memetics," by Dr. Robert Finkelstein, presented at the Social Media for Defense Summit, 2011). Basically, Finkelstein's Meme Control Center would pump the internet full of "memes" that would benefit the national security of the United States. Finkelstein saw a future in which guns and bombs are replaced by rumor, digital fakery, and social engineering.


AI and CGI will transform information warfare, boost hoaxes, and escalate revenge porn

#artificialintelligence

Hoaxes and trickery are almost as old as human history. When the Roman Republic first conquered the Italian peninsula between 500 and 200 BC, it was known to send fake refugees into enemy cities to "[subvert] the enemy from within." "Pope Joan" was allegedly a woman who tricked her way into becoming pope in the Middle Ages by pretending to be a man -- but the entire story is now viewed as fake, a fictional yarn spun centuries after her purported reign. "Vortigern and Rowena," a play that debuted in 1798, was initially touted as a lost work of William Shakespeare -- but was in fact a forgery created by William Henry Ireland. And in the 1980s, the Soviet Union attempted to damage the United States' reputation and sow discord among its allies by spreading the myth that American scientists had created AIDS in a military laboratory, in an "active measures" disinformation campaign called "Operation INFEKTION."


Scotland to Play Host to Royal Navy Cyber War Games - Digit

#artificialintelligence

Known as Information Warrior 17, the exercise will take place between 26 March and 6 April at various locations in and around Scotland. It is intended to drive the development of cutting-edge technology in the Navy, including artificial intelligence, to put the UK at the forefront of information warfare. It will also test the defensive capabilities of ships and submarines against cyber attacks, which are as real a threat in the modern age as traditional weapons like rockets, missiles, and torpedoes. Admiral Sir Philip Jones, First Sea Lord, said: "We are living in a data-driven age in which our adversaries are already exploiting the potential of Information Warfare, and we must respond in kind." The Royal Navy has expressed an interest in using AI technology to develop a "Ship's Mind" at the centre of its warships, enhancing efficiency and allowing fast and complex decisions to be made automatically.